Diffractive optical networks provide rich opportunities for visual computing tasks since the spatial information of a scene can be directly accessed by a diffractive processor without requiring any digital pre-processing steps. Here we present data class-specific transformations all-optically performed between the input and output fields-of-view (FOVs) of a diffractive network. The visual information of the objects is encoded into the amplitude (A), phase (P), or intensity (I) of the optical field at the input, which is all-optically processed by a data class-specific diffractive network. At the output, an image sensor array directly measures the transformed patterns, all-optically encrypted using the transformation matrices pre-assigned to different data classes, i.e., a separate matrix for each data class. The original input images can be recovered by applying the correct decryption key (the inverse transformation) corresponding to the matching data class, while applying any other key leads to loss of information. The class-specificity of these all-optical diffractive transformations creates opportunities where different keys can be distributed to different users; each user can decode the acquired images of only one data class, serving multiple users in an all-optically encrypted manner. We numerically demonstrated class-specific all-optical transformations covering A-->A, I-->I, and P-->I mappings using various image datasets. We also experimentally validated the feasibility of this framework by fabricating a class-specific I-->I transformation diffractive network using two-photon polymerization and successfully testing it at a wavelength of 1550 nm. Data class-specific all-optical transformations provide a fast and energy-efficient method for image and data encryption, enhancing data security and privacy.
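As a rough numerical intuition for the class-specific encryption idea above, the NumPy sketch below replaces the diffractive hardware with per-class random matrices; the matrix sizes and class names are illustrative assumptions, not the paper's trained diffractive transformations.

```python
# Minimal sketch: each data class gets its own invertible matrix, standing in for the
# transformation a class-specific diffractive network would perform all-optically.
# Decrypting with the matching inverse recovers the input; a wrong key scrambles it.
import numpy as np

rng = np.random.default_rng(0)
n = 16 * 16                                                        # flattened 16x16 input FOV
keys = {c: rng.normal(size=(n, n)) for c in ("class_A", "class_B")}  # per-class matrices
inv_keys = {c: np.linalg.inv(T) for c, T in keys.items()}          # decryption keys

x = rng.random(n)                          # a flattened input image belonging to class_A
y = keys["class_A"] @ x                    # class-specific transformation of the input

recovered_ok = inv_keys["class_A"] @ y     # correct key -> faithful recovery
recovered_bad = inv_keys["class_B"] @ y    # wrong key   -> information is lost

print(np.allclose(recovered_ok, x))        # True
print(np.linalg.norm(recovered_bad - x) > 1.0)   # True (large reconstruction error)
```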
Pathological diagnosis relies on the visual inspection of histologically stained thin tissue samples, where different types of stains are used to create contrast and highlight various desired histological features. However, destructive histochemical staining procedures are usually irreversible, making it difficult to obtain multiple stains on the same tissue section. Here, we demonstrate a virtual stain transfer framework based on a cascaded deep neural network (C-DNN) to digitally transform hematoxylin and eosin (H&E)-stained tissue images into other types of histological stains. Unlike a single network structure that takes one stain type as input and outputs a digital image of another stain type, the C-DNN first uses virtual staining to transform autofluorescence microscopy images into H&E, and then performs the stain transfer from H&E to another stain domain in a cascaded manner. This cascaded structure in the training phase allows the model to directly exploit histochemically stained image data for both H&E and the target special stain. This advantage alleviates the challenge of paired data acquisition and improves the image quality and color accuracy of the virtual stain transfer from H&E to another stain. We validated the superior performance of this C-DNN approach using kidney needle core biopsy tissue sections, successfully transferring H&E-stained tissue images into virtual PAS (periodic acid-Schiff) stain. This method provides high-quality virtual images of special stains from existing, histochemically stained slides and creates new opportunities in digital pathology by performing highly accurate stain-to-stain transformations.
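To make the cascaded structure concrete, here is a hedged PyTorch sketch of the two-stage C-DNN idea; the tiny generators, channel counts, and input sizes are placeholders, not the paper's actual architectures.

```python
# Illustrative cascade: stage 1 maps autofluorescence input to virtual H&E,
# stage 2 maps the virtual H&E image to a virtual special stain (e.g., PAS).
import torch
import torch.nn as nn

def tiny_generator(in_ch: int, out_ch: int) -> nn.Module:
    # Stand-in for a full image-to-image generator used in virtual staining.
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_ch, 3, padding=1),
    )

autofluo_to_he = tiny_generator(in_ch=2, out_ch=3)   # stage 1: autofluorescence -> virtual H&E
he_to_pas      = tiny_generator(in_ch=3, out_ch=3)   # stage 2: H&E -> virtual PAS

autofluorescence = torch.randn(1, 2, 256, 256)       # dummy unstained-tissue input
virtual_he  = autofluo_to_he(autofluorescence)       # intermediate virtual H&E image
virtual_pas = he_to_pas(virtual_he)                  # cascaded output: virtual PAS stain
print(virtual_pas.shape)                             # torch.Size([1, 3, 256, 256])
```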
Teaching machines to recognize a new category from only a few training samples, or even a single one, remains challenging due to the scarcity of data for the novel category. Humans, however, can quickly learn new classes from just a few samples, because they can tell which discriminative features of each category to focus on based on both visual and semantic prior knowledge. To better exploit such prior knowledge, we propose a Semantic Guided Attention (SEGA) mechanism in which semantic knowledge guides visual perception in a top-down manner, indicating which visual features should be attended to when distinguishing one category from others. As a result, the embedding of a novel class can be more discriminative even with few samples. Concretely, a feature extractor, trained with visual prior knowledge transferred from the base classes, embeds the few images of each novel class into a visual prototype. We then learn a network that maps semantic knowledge to a class-specific attention vector, which is used to perform feature selection and enhance the visual prototype. Extensive experiments on miniImageNet, tieredImageNet, CIFAR-FS, and CUB show that our semantic guided attention realizes its anticipated function and outperforms state-of-the-art results.
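The following sketch illustrates the SEGA-style mechanism at a very small scale; the feature and semantic-embedding dimensions, module names, and the sigmoid mapping are assumptions for illustration only.

```python
# A small network maps a class's semantic embedding (e.g., a word vector) to a
# per-dimension attention vector, which re-weights the visual prototype computed
# from the few support images of that class.
import torch
import torch.nn as nn

feat_dim, sem_dim = 512, 300
semantic_to_attention = nn.Sequential(           # top-down semantic guidance
    nn.Linear(sem_dim, feat_dim), nn.Sigmoid()
)

support_features = torch.randn(5, feat_dim)       # 5-shot visual features of a novel class
visual_prototype = support_features.mean(dim=0)   # class prototype from the few samples
semantic_embedding = torch.randn(sem_dim)         # semantic prior, e.g., a GloVe-like vector

attention = semantic_to_attention(semantic_embedding)   # class-specific attention vector
enhanced_prototype = attention * visual_prototype       # feature selection on the prototype

query = torch.randn(feat_dim)
score = torch.cosine_similarity(query, enhanced_prototype, dim=0)   # classify by similarity
print(score.item())
```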
A matrix-free and a low-rank approximation preconditioner are proposed to accelerate the convergence of stochastic gradient descent (SGD) by exploiting curvature information sampled from Hessian-vector products or from finite differences of parameters and gradients, similar to the BFGS algorithm. Both preconditioners are fitted in an online manner by minimizing a criterion that is free of line search and robust to stochastic gradient noise, and they are further constrained to lie on certain connected Lie groups to preserve the corresponding symmetry or invariance, e.g., the orientation of coordinates under the connected general linear group with positive determinant. The Lie group's equivariance property facilitates preconditioner fitting, and its invariance property removes the need for damping, which is common in second-order optimizers but difficult to tune. The learning rate for parameter updating and the step size for preconditioner fitting are naturally normalized, and their default values work well in most situations.
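The NumPy sketch below shows, under strong simplifications, how curvature probed from parameter/gradient differences can fit a diagonal preconditioner online; it omits the Lie-group constraints and the paper's actual fitting criterion, and the toy loss, step sizes, and update rule are assumptions.

```python
# Diagonal preconditioner fitted online from (parameter-difference, gradient-difference)
# pairs, which act like BFGS secant pairs, then applied to the stochastic gradient.
import numpy as np

def noisy_grad(theta, rng):
    # Toy quadratic loss 0.5 * theta^T A theta with a small amount of gradient noise.
    A = np.diag([10.0, 1.0])
    return A @ theta + 0.001 * rng.normal(size=theta.shape)

rng = np.random.default_rng(0)
theta = np.array([1.0, 1.0])
p = np.ones_like(theta)                # diagonal preconditioner estimate
lr, fit_rate, eps = 0.05, 0.1, 1e-12

for _ in range(300):
    g = noisy_grad(theta, rng)
    dtheta = -lr * p * g               # preconditioned SGD step
    theta = theta + dtheta
    dg = noisy_grad(theta, rng) - g    # finite-difference curvature probe (secant pair)
    target = np.abs(dtheta) / (np.abs(dg) + eps)   # per-coordinate curvature-based scaling
    p = (1 - fit_rate) * p + fit_rate * target     # online preconditioner fitting

print(theta)   # should have moved close to the optimum at [0, 0]
```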
In this paper, we work on a sound recognition system that continually incorporates new sound classes. Our main goal is to develop a framework where the model can be updated without relying on labeled data. For this purpose, we propose adopting representation learning, where an encoder is trained using unlabeled data. This learning framework enables the study and implementation of a practically relevant use case where only a small amount of labeled data is available in a continual learning context. We also make the empirical observation that a similarity-based representation learning method within this framework is robust to forgetting, even when no explicit mechanism against forgetting is employed. We show that this approach obtains performance similar to several distillation-based continual learning methods when they are applied to self-supervised representation learning methods.
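A conceptual PyTorch sketch of this setup follows; the encoder, the similarity objective, the augmentation, and the data shapes are stand-ins rather than the paper's exact components.

```python
# The encoder is updated with a similarity-based self-supervised loss on unlabeled audio
# from each new task; only a lightweight classifier uses the small amount of labeled data.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64))  # stand-in audio encoder
classifier = nn.Linear(64, 10)                                               # updated per task
opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-3)
opt_cls = torch.optim.Adam(classifier.parameters(), lr=1e-3)

def augment(x):
    # Stand-in for audio augmentation (e.g., added noise or masking).
    return x + 0.1 * torch.randn_like(x)

for task in range(3):                                # a stream of sound-class tasks
    unlabeled = torch.randn(256, 128)                # unlabeled audio features for this task
    for _ in range(10):                              # similarity-based representation learning
        z1 = F.normalize(encoder(augment(unlabeled)), dim=1)
        z2 = F.normalize(encoder(augment(unlabeled)), dim=1)
        loss = -(z1 * z2).sum(dim=1).mean()          # pull augmented views together
        opt_enc.zero_grad()
        loss.backward()
        opt_enc.step()

    labeled_x, labeled_y = torch.randn(16, 128), torch.randint(0, 10, (16,))  # few labels only
    logits = classifier(encoder(labeled_x).detach()) # labels train only the classifier head
    cls_loss = F.cross_entropy(logits, labeled_y)
    opt_cls.zero_grad()
    cls_loss.backward()
    opt_cls.step()
```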
The current popular two-stream, two-stage tracking framework extracts the template and search-region features separately and then performs relation modeling; as a result, the extracted features lack awareness of the target and have limited target-background discriminability. To tackle this issue, we propose a novel one-stream tracking (OSTrack) framework that unifies feature learning and relation modeling by bridging the template-search image pairs with bidirectional information flows. In this way, discriminative target-oriented features can be dynamically extracted by mutual guidance. Since no extra heavy relation-modeling module is needed and the implementation is highly parallelized, the proposed tracker runs at a fast speed. To further improve inference efficiency, an in-network candidate early elimination module is proposed based on the strong similarity prior calculated in the one-stream framework. As a unified framework, OSTrack achieves state-of-the-art performance on multiple benchmarks; in particular, it shows impressive results on the one-shot tracking benchmark GOT-10k, achieving 73.7% AO and improving the existing best result (SwinTrack) by 4.3%. Besides, our method maintains a good performance-speed trade-off and shows faster convergence. The code and models are available at https://github.com/botaoye/OSTrack.
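The simplified sketch below illustrates the one-stream idea and an attention-based candidate pruning step; the token counts, the single attention layer, and the specific elimination rule are illustrative assumptions, not OSTrack's actual design choices.

```python
# Template and search-region tokens are concatenated and processed jointly, so feature
# extraction and template-search relation modeling happen in one stream; attention from
# the template tokens is then reused to drop unlikely search-region candidates early.
import torch
import torch.nn as nn

dim, n_template, n_search = 256, 64, 256
attn = nn.MultiheadAttention(embed_dim=dim, num_heads=8, batch_first=True)

template_tokens = torch.randn(1, n_template, dim)   # patch embeddings of the template image
search_tokens = torch.randn(1, n_search, dim)       # patch embeddings of the search region

tokens = torch.cat([template_tokens, search_tokens], dim=1)   # one-stream: joint token sequence
out, attn_weights = attn(tokens, tokens, tokens)              # mutual (bidirectional) attention

# Early candidate elimination: score each search token by the attention it receives
# from the template tokens and keep only the most promising half.
scores = attn_weights[:, :n_template, n_template:].mean(dim=1)   # (1, n_search)
keep = scores.topk(k=n_search // 2, dim=1).indices
pruned_search = out[:, n_template:, :].gather(1, keep.unsqueeze(-1).expand(-1, -1, dim))
print(pruned_search.shape)   # torch.Size([1, 128, 256])
```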
Recently, there has been growing interest in building question answering (QA) models that reason across multiple modalities, such as text and images. However, QA over images is often limited to picking an answer from a predefined set of options. Moreover, images in the real world, and especially in news, contain objects that are co-referential with the text, with complementary information coming from both modalities. In this paper, we present a new QA evaluation benchmark of 1,384 questions over news articles that require cross-media grounding of objects in images onto text. Specifically, the task involves multi-hop questions that require reasoning over image-caption pairs to identify the grounded visual object, and then predicting a span from the news body text to answer the question. In addition, we introduce a novel multimedia data augmentation framework, based on cross-media knowledge extraction and synthetic question-answer generation, to automatically augment data that can provide weak supervision for this task. We evaluate both pipeline-based and end-to-end pretraining-based multimedia QA models on our benchmark, and show that they achieve promising performance while lagging considerably behind human performance, leaving ample room for future work on this challenging new task.
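As a toy illustration of the pipeline setting described above, the following snippet mimics the two hops with invented placeholder data and word-overlap heuristics; it is not the benchmark's models or data.

```python
# Hop 1: ground the question in the image-caption pair to pick a visual object.
# Hop 2: extract an answer span about that object from the news body text.
def ground_object(question: str, caption_entities: list[str]) -> str:
    # Placeholder grounding: pick the caption entity sharing the most words with the question.
    overlap = lambda e: len(set(e.lower().split()) & set(question.lower().split()))
    return max(caption_entities, key=overlap)

def extract_span(entity: str, body_text: str) -> str:
    # Placeholder span prediction: first body sentence mentioning any word of the grounded entity.
    entity_words = set(entity.lower().split())
    for sentence in body_text.split("."):
        if entity_words & set(sentence.lower().split()):
            return sentence.strip()
    return ""

caption_entities = ["the prime minister", "a protester holding a sign"]
body = "The protester was later identified as Jane Doe. The prime minister declined to comment."
question = "Who is the protester holding a sign?"

entity = ground_object(question, caption_entities)   # hop 1: cross-media grounding
answer = extract_span(entity, body)                  # hop 2: span from the body text
print(entity, "->", answer)
```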
We present the High-Resolution Transformer (HRFormer), which learns high-resolution representations for dense prediction tasks, in contrast to the original Vision Transformer that produces low-resolution representations at high memory and computational cost. We take advantage of the multi-resolution parallel design introduced in high-resolution convolutional networks (HRNet), together with local-window self-attention, which performs self-attention over small non-overlapping image windows, to improve memory and computational efficiency. In addition, we introduce a convolution into the FFN to exchange information across the disconnected image windows. We demonstrate the effectiveness of the High-Resolution Transformer on human pose estimation and semantic segmentation tasks; for example, HRFormer achieves strong COCO pose estimation performance with roughly 50% fewer parameters and 30% fewer FLOPs. Code is available at: https://github.com/HRNet/HRFormer.
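The schematic sketch below combines the two ingredients mentioned above, local-window self-attention and a convolution inside the FFN; the window size, channel counts, and layer layout are assumptions and do not reproduce HRFormer's multi-resolution branches.

```python
# Self-attention is computed only within small non-overlapping windows of the feature
# map; a depth-wise convolution inside the FFN then exchanges information across the
# otherwise disconnected windows.
import torch
import torch.nn as nn

dim, win = 64, 8
attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)
ffn = nn.Sequential(
    nn.Conv2d(dim, dim * 4, 1), nn.GELU(),
    nn.Conv2d(dim * 4, dim * 4, 3, padding=1, groups=dim * 4),   # depth-wise 3x3 conv in the FFN
    nn.GELU(),
    nn.Conv2d(dim * 4, dim, 1),
)

x = torch.randn(1, dim, 64, 64)                       # high-resolution feature map
B, C, H, W = x.shape

# Partition into non-overlapping windows and apply self-attention inside each window.
windows = x.unfold(2, win, win).unfold(3, win, win)   # (B, C, H/win, W/win, win, win)
windows = windows.permute(0, 2, 3, 4, 5, 1).reshape(-1, win * win, C)
windows, _ = attn(windows, windows, windows)          # local-window self-attention

# Reassemble the feature map and let the convolutional FFN mix information across windows.
windows = windows.reshape(B, H // win, W // win, win, win, C).permute(0, 5, 1, 3, 2, 4)
x = windows.reshape(B, C, H, W)
x = x + ffn(x)
print(x.shape)    # torch.Size([1, 64, 64, 64])
```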
Image-level weakly supervised semantic segmentation is a challenging problem that has been deeply studied in recent years. Most advanced solutions exploit class activation maps (CAMs). However, CAMs can hardly serve as the object mask due to the gap between full and weak supervision. In this paper, we propose a self-supervised equivariant attention mechanism (SEAM) to discover additional supervision and narrow this gap. Our method is based on the observation that equivariance is an implicit constraint in fully supervised semantic segmentation, whose pixel-level labels undergo the same spatial transformations as the input images during data augmentation. However, this constraint is lost on CAMs trained with image-level supervision. Therefore, we propose consistency regularization on CAMs predicted from differently transformed images to provide self-supervision for network learning. Moreover, we propose a pixel correlation module (PCM), which exploits contextual appearance information and refines the prediction of the current pixel using its similar neighbors, leading to further improvement in CAM consistency. Extensive experiments on the PASCAL VOC 2012 dataset demonstrate that our method outperforms state-of-the-art methods using the same level of supervision. The code is released online.
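A minimal sketch of the equivariant consistency idea follows; the CAM network and the choice of transformation (a horizontal flip) are placeholders, and the PCM refinement step is omitted.

```python
# Equivariant consistency regularization compares the CAM computed from a transformed
# image with the same transformation applied to the CAM of the original image,
# providing self-supervision on top of the image-level labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes = 20
cam_head = nn.Sequential(                        # stand-in backbone + 1x1 classifier producing CAMs
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, num_classes, 1),
)

image = torch.randn(2, 3, 64, 64)
flipped = torch.flip(image, dims=[3])            # a spatial transformation A(.), here a horizontal flip

cam_original = cam_head(image)                   # CAM(x)
cam_transformed = cam_head(flipped)              # CAM(A(x))
equivariant_target = torch.flip(cam_original, dims=[3])   # A(CAM(x))

consistency_loss = F.l1_loss(cam_transformed, equivariant_target)   # SEAM-style regularization
consistency_loss.backward()                      # trained jointly with the classification loss
print(consistency_loss.item())
```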
Few-shot classification aims to recognize unlabeled samples from unseen classes given only a few labeled samples. The unseen classes and the low-data regime make few-shot classification very challenging. Many existing approaches extract features from labeled and unlabeled samples independently; as a result, the features are not discriminative enough. In this work, we propose a novel Cross Attention Network to address these challenges. First, a Cross Attention Module is introduced to deal with the problem of unseen classes. The module generates cross attention maps for each pair of class feature and query sample feature so as to highlight the target object regions, making the extracted features more discriminative. Second, a transductive inference algorithm is proposed to alleviate the low-data problem; it iteratively utilizes the unlabeled query set to augment the support set, thereby making the class features more representative. Extensive experiments on two benchmarks show that our method is a simple, effective, and computationally efficient framework that outperforms the state of the art.
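The sketch below illustrates a correlation-based cross attention between a class feature map and a query feature map; it is a simplification of the paper's Cross Attention Module, and the feature sizes and pooling choices are assumptions.

```python
# Spatial features of a class prototype and of a query attend to each other through
# their correlation map; the resulting attention highlights target-like regions before
# attention-weighted pooling and similarity-based matching.
import torch
import torch.nn.functional as F

C, H, W = 64, 6, 6
class_feat = torch.randn(C, H, W)                  # feature map of a support (class) image
query_feat = torch.randn(C, H, W)                  # feature map of a query image

p = F.normalize(class_feat.reshape(C, -1), dim=0)  # (C, HW) unit-norm spatial descriptors
q = F.normalize(query_feat.reshape(C, -1), dim=0)  # (C, HW)

correlation = p.t() @ q                            # (HW, HW): class positions x query positions
query_attention = torch.softmax(correlation.mean(dim=0), dim=0).reshape(H, W)   # target regions in query
class_attention = torch.softmax(correlation.mean(dim=1), dim=0).reshape(H, W)   # target regions in support

attended_query = (query_feat * query_attention).reshape(C, -1).sum(dim=1)   # attention-weighted pooling
attended_class = (class_feat * class_attention).reshape(C, -1).sum(dim=1)
score = F.cosine_similarity(attended_query, attended_class, dim=0)          # few-shot matching score
print(score.item())
```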